LiveView applications can handle thousands of concurrent users with proper optimization. This guide covers proven techniques for maximizing performance.

Understanding the Rendering Pipeline

LiveView’s rendering happens in stages:
  1. Event received - Client sends event
  2. Callback executed - handle_event/3 updates assigns
  3. Render triggered - Template is rendered
  4. Diff calculated - Changed parts identified
  5. Patch sent - Minimal diff sent to client
  6. DOM updated - Client patches DOM
Optimizations target each stage.
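Stages 1 through 5 can be traced in a minimal, illustrative counter LiveView (module and event names are invented for the example):

```elixir
defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, :count, 0)}
  end

  # Stages 1-2: the click event arrives here and updates assigns
  def handle_event("increment", _params, socket) do
    {:noreply, assign(socket, :count, socket.assigns.count + 1)}
  end

  # Stages 3-5: LiveView re-renders, diffs against the previous
  # render, and pushes only the changed {@count} text to the client
  def render(assigns) do
    ~H"""
    <button phx-click="increment">Clicked {@count} times</button>
    """
  end
end
```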

Assign Management

Use Streams for Large Collections

Avoid keeping large lists in assigns:
# BAD: Keeps all posts in memory
def mount(_params, _session, socket) do
  posts = Blog.list_posts()  # 10,000 posts
  {:ok, assign(socket, :posts, posts)}
end
# GOOD: Streams are freed after render
def mount(_params, _session, socket) do
  posts = Blog.list_posts()
  {:ok, stream(socket, :posts, posts)}
end
Memory comparison:
  • Assigns: 10,000 posts × 500 bytes = ~5 MB per LiveView
  • Streams: Freed immediately after render = ~0 bytes

Assign Only What Changes

Don’t re-assign unchanged data:
# BAD: Re-assigns user even if unchanged
def handle_event("increment", _, socket) do
  {:noreply,
   socket
   |> assign(:count, socket.assigns.count + 1)
   |> assign(:user, socket.assigns.user)}  # Unnecessary!
end
# GOOD: Only assign what changed
def handle_event("increment", _, socket) do
  {:noreply, assign(socket, :count, socket.assigns.count + 1)}
end

Use assign_new/3 for Expensive Operations

Avoid recalculating on re-mounts:
def mount(_params, _session, socket) do
  {:ok,
   socket
   |> assign_new(:categories, fn -> Blog.list_categories() end)
   |> assign_new(:tags, fn -> Blog.list_tags() end)}
end
From the docs: assign_new/3 only runs the function if the key is not already assigned, so values computed during the disconnected render are reused on the connected mount instead of being fetched twice.

Change Tracking Optimization

Enable Granular Tracking

Structure templates for minimal updates:
<!-- BAD: Entire card re-renders if any field changes -->
<div class="card">
  <h1>{@post.title}</h1>
  <p>{@post.body}</p>
  <span>{@post.view_count} views</span>
</div>
<!-- GOOD: Only changed parts re-render -->
<div class="card">
  <h1>{@post.title}</h1>
  <p>{@post.body}</p>
  <.live_component id={"views-#{@post.id}"} module={ViewCounter} view_count={@post.view_count} />
</div>
Now updating view_count only re-renders the component.
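For completeness, here is a minimal sketch of the ViewCounter component referenced above (the markup is illustrative):

```elixir
defmodule MyAppWeb.ViewCounter do
  use Phoenix.LiveComponent

  # Receives only :id and :view_count, so it re-renders
  # only when the count itself changes
  def render(assigns) do
    ~H"""
    <span>{@view_count} views</span>
    """
  end
end
```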

Use Keys in Comprehensions

Enable efficient list updates:
<!-- BAD: Entire list re-renders on insert -->
<div :for={post <- @posts} id={post.id}>
  {post.title}
</div>
<!-- GOOD: Only new/changed posts re-render -->
<div :for={post <- @posts} :key={post.id} id={post.id}>
  {post.title}
</div>

Database Query Optimization

Preload Associations

Avoid N+1 queries:
# BAD: N+1 queries
def mount(_params, _session, socket) do
  posts = Blog.list_posts()  # 1 query
  # Then N queries in template when accessing post.author
  {:ok, assign(socket, :posts, posts)}
end
# GOOD: Single query with preload
def mount(_params, _session, socket) do
  posts = Blog.list_posts() |> Repo.preload(:author)  # 2 queries total
  {:ok, assign(socket, :posts, posts)}
end

Paginate Results

Limit initial data load:
def mount(_params, _session, socket) do
  {:ok,
   socket
   |> assign(page: 1, per_page: 20)
   |> load_posts()}
end

defp load_posts(socket) do
  %{page: page, per_page: per_page} = socket.assigns

  # limit/offset operate on an Ecto.Query (and need `import Ecto.Query`),
  # so pipe a query here, not an already-loaded list
  posts =
    Blog.posts_query()
    |> limit(^per_page)
    |> offset(^((page - 1) * per_page))
    |> Repo.all()

  stream(socket, :posts, posts)
end
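A hypothetical "load-more" event can then reuse load_posts/1 to pull in the next page:

```elixir
# Streams append by default, so each new page is added to the
# client-side list without re-sending earlier entries
def handle_event("load-more", _params, socket) do
  socket =
    socket
    |> update(:page, &(&1 + 1))
    |> load_posts()

  {:noreply, socket}
end
```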

Use Indexes

Ensure database columns are indexed:
# priv/repo/migrations/*_add_indexes.exs
defmodule MyApp.Repo.Migrations.AddIndexes do
  use Ecto.Migration

  def change do
    create index(:posts, [:user_id])
    create index(:posts, [:published_at])
    create index(:posts, [:category_id])
  end
end

Component Optimization

Use Function Components for Static Content

Function components are faster than LiveComponents:
# GOOD: No state needed
attr :title, :string, required: true
attr :body, :string, required: true

def post_card(assigns) do
  ~H"""
  <div class="card">
    <h2>{@title}</h2>
    <p>{@body}</p>
  </div>
  """
end
Use LiveComponents only when state or event handling is needed.

Minimize Component Re-renders

Pass only necessary assigns:
# BAD: Component re-renders on any assign change
<.live_component id="user-form" module={UserForm} {assigns} />
# GOOD: Re-renders only when user or changeset changes
<.live_component id="user-form" module={UserForm} user={@user} changeset={@changeset} />

Implement update_many/1

Batch component updates:
defmodule MyAppWeb.PostComponent do
  use Phoenix.LiveComponent

  # Called once for all instances
  def update_many(assigns_sockets) do
    # Batch fetch data
    post_ids = Enum.map(assigns_sockets, fn {assigns, _socket} -> assigns.post_id end)
    posts = Blog.get_posts_by_ids(post_ids) |> Map.new(&{&1.id, &1})

    # Update each socket
    Enum.map(assigns_sockets, fn {assigns, socket} ->
      post = Map.get(posts, assigns.post_id)
      assign(socket, :post, post)
    end)
  end
end

Template Optimization

Avoid Variables in Templates

Variables disable change tracking:
<!-- BAD: Entire section re-renders always -->
<% total = @items |> Enum.map(& &1.price) |> Enum.sum() %>
<div>Total: {total}</div>
# GOOD: Assign in callback
def handle_event("update", _, socket) do
  items = socket.assigns.items
  total = items |> Enum.map(& &1.price) |> Enum.sum()
  {:noreply, assign(socket, :total, total)}
end
<div>Total: {@total}</div>

Minimize Nested Components

Deep nesting slows rendering:
<!-- BAD: Deep nesting -->
<.card>
  <.section>
    <.item>
      <.detail>
        <.value>{@data}</.value>
      </.detail>
    </.item>
  </.section>
</.card>
<!-- GOOD: Flatter structure -->
<div class="card">
  <div class="section">
    <div class="item">
      <span class="value">{@data}</span>
    </div>
  </div>
</div>

Network Optimization

Debounce Events

Limit event frequency with phx-debounce:
<input
  type="text"
  phx-change="search"
  phx-debounce="300"
  value={@query}
/>

Use Temporary Assigns

Free large assigns after render:
def mount(_params, _session, socket) do
  socket = assign(socket, :query, "")

  # Temporary assigns are declared in mount's return tuple;
  # :results is reset to [] after every render
  {:ok, socket, temporary_assigns: [results: []]}
end

def handle_event("search", %{"query" => query}, socket) do
  results = Search.search(query)  # Large dataset
  {:noreply, assign(socket, query: query, results: results)}
  # results are freed after render
end

Batch Updates

LiveView diffs and renders once per callback, so several chained assign calls in one handle_event already produce a single patch. Extra renders come from splitting an update across separate messages:
# BAD: Three messages, each handled by its own handle_info/2,
# each triggering a render
def handle_event("save", params, socket) do
  send(self(), :set_loading)
  send(self(), :clear_error)
  send(self(), {:set_data, Data.save(params)})

  {:noreply, socket}
end
# GOOD: One callback, one render
def handle_event("save", params, socket) do
  data = Data.save(params)

  {:noreply, assign(socket, loading: false, error: nil, data: data)}
end

Client-Side Optimization

Use phx-update="ignore"

Prevent re-rendering static content:
<div id="chart" phx-update="ignore">
  <!-- Chart.js renders here once -->
  <canvas id="my-chart"></canvas>
</div>

Implement Client Hooks

Move expensive operations to the client:
// assets/js/hooks.js
export const ChartHook = {
  mounted() {
    this.chart = new Chart(this.el, this.chartConfig())

    this.handleEvent("update_chart", ({data}) => {
      this.chart.data = data
      this.chart.update()
    })
  },

  destroyed() {
    this.chart.destroy()
  }
}
# Send updates without re-rendering
def handle_event("refresh", _, socket) do
  data = Analytics.get_chart_data()
  {:noreply, push_event(socket, "update_chart", %{data: data})}
end

Async Operations

Use Async Assigns

Avoid blocking mount:
def mount(_params, _session, socket) do
  # assign_async tracks its own loading/failed state,
  # so no separate :loading assign is needed
  {:ok, assign_async(socket, :stats, fn -> {:ok, %{stats: Analytics.get_stats()}} end)}
end
<.async_result :let={stats} assign={@stats}>
  <:loading>Fetching stats...</:loading>
  <:failed :let={_reason}>Failed to load</:failed>
  Stats: {stats.total}
</.async_result>

Run Background Tasks

Avoid blocking callbacks:
def handle_event("process", params, socket) do
  # BAD: Blocks for 5 seconds
  # result = ExpensiveJob.run(params)

  # GOOD: Run in background. Capture the LiveView pid first:
  # inside the task, self() is the task's own pid, not the LiveView's.
  lv = self()

  Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
    result = ExpensiveJob.run(params)
    send(lv, {:job_complete, result})
  end)

  {:noreply, assign(socket, :processing, true)}
end

def handle_info({:job_complete, result}, socket) do
  {:noreply, assign(socket, processing: false, result: result)}
end

Memory Management

Monitor LiveView Processes

Track memory usage:
iex> Phoenix.LiveView.Debug.list_liveviews() |> Enum.map(fn %{pid: pid} ->
...>   {:memory, bytes} = Process.info(pid, :memory)
...>   {pid, bytes / 1024 / 1024}
...> end)
[
  {#PID<0.123.0>, 2.5},   # 2.5 MB
  {#PID<0.124.0>, 15.2},  # 15 MB! 🚨
  {#PID<0.125.0>, 1.8}
]

Use Streams Over Assigns

Benchmark comparison (an illustrative sketch: `socket` must come from a connected test setup, and Phoenix.LiveView.Renderer is a private API):
Benchee.run(%{
  "assigns with 1000 items" => fn ->
    items = Enum.map(1..1000, &%{id: &1, name: "Item #{&1}"})
    socket = assign(socket, :items, items)
    Phoenix.LiveView.Renderer.to_rendered(socket, MyLive)
  end,
  "stream with 1000 items" => fn ->
    items = Enum.map(1..1000, &%{id: &1, name: "Item #{&1}"})
    socket = stream(socket, :items, items)
    Phoenix.LiveView.Renderer.to_rendered(socket, MyLive)
  end
})

# Example results:
# assigns: 150 MB memory, 250ms
# stream:  15 MB memory, 180ms

Benchmarking

Measure Callback Performance

def handle_event("search", params, socket) do
  {time, results} = :timer.tc(fn -> Search.search(params) end)

  if time > 1_000_000 do  # > 1 second
    Logger.warning("Slow search: #{time / 1_000}ms")
  end

  {:noreply, assign(socket, :results, results)}
end

Profile with Telemetry

:telemetry.attach(
  "liveview-perf",
  [:phoenix, :live_view, :handle_event, :stop],
  fn _event, %{duration: duration}, metadata, _config ->
    # duration arrives in :native time units; convert before comparing
    ms = System.convert_time_unit(duration, :native, :millisecond)

    if ms > 1_000 do  # > 1 second
      Logger.warning("""
      Slow event handler:
        Event: #{metadata.event}
        View: #{inspect(metadata.socket.view)}
        Duration: #{ms}ms
      """)
    end
  end,
  end,
  nil
)

Production Optimizations

Enable ETS for Sessions

Cookie sessions serialize and verify data on every request; Plug's ETS store keeps session data in server memory instead:
# lib/my_app_web/endpoint.ex
plug Plug.Session,
  store: :ets,
  key: "_my_app_key",
  table: :session  # create this ETS table at application start
Note that ETS sessions live in a single node's memory: they are lost on restart and are not shared across nodes.

Tune VM Settings

# rel/vm.args.eex (mix releases read emulator flags from here)
+sbwt very_long      # scheduler busy-wait threshold
+sbwtdcpu very_long  # same, for dirty CPU schedulers
+swt very_low        # scheduler wakeup threshold
+sub true            # scheduler utilization balancing
+JPperf true         # Linux perf profiling support (OTP 25+)
Treat these as starting points and benchmark under your own load; longer busy-waiting trades CPU for latency.

Use HTTP/2

Enable multiplexing. Cowboy and Bandit negotiate HTTP/2 automatically over TLS (via ALPN), so configuring HTTPS is enough:
# config/prod.exs
config :my_app, MyAppWeb.Endpoint,
  url: [host: "example.com", port: 443, scheme: "https"],
  https: [
    port: 443,
    cipher_suite: :strong,
    certfile: "priv/cert/cert.pem",
    keyfile: "priv/cert/key.pem"
  ]

Performance Checklist

Before deploying:
  • Use streams for lists > 50 items
  • Add :key to comprehensions
  • Preload database associations
  • Debounce rapid events
  • Use temporary assigns for large data
  • Implement update_many/1 for components
  • Avoid variables in templates
  • Monitor LiveView memory usage
  • Enable HTTP/2
  • Profile slow callbacks
Measure results:
# Load test with k6
k6 run --vus 100 --duration 30s loadtest.js

Real-World Performance

Typical optimizations yield:
Optimization          Memory Reduction   Latency Improvement
Streams vs assigns    80-95%             20-40%
Change tracking       N/A                60-90%
Database preloading   10-20%             50-80%
Temporary assigns     30-70%             10-20%
Component batching    5-15%              30-50%

Summary

Optimizing LiveView applications involves:
  1. Efficient assigns: Use streams, temporary assigns, and assign_new/3
  2. Change tracking: Enable granular updates with proper template structure
  3. Database optimization: Preload associations and paginate
  4. Component design: Prefer function components, pass minimal assigns
  5. Network efficiency: Debounce, batch, and use async operations
  6. Client-side hooks: Move expensive operations to the browser
  7. Monitoring: Track memory and profile performance
Follow these practices to build LiveView apps that scale to thousands of concurrent users.